Stochastic Zeroth-Order Functional Constrained Optimization: Oracle Complexity and Applications
Authors
Abstract
Functionally constrained stochastic optimization problems, where neither the objective function nor the constraint functions are analytically available, arise frequently in machine learning applications. In this work, assuming we only have access to noisy evaluations of the objective and constraint functions, we propose and analyze stochastic zeroth-order algorithms for solving this class of problems. When the domain of the functions is ℝⁿ, assuming there are m constraint functions, we establish oracle complexities of order O((m + 1)n/ϵ²) and O((m + 1)n/ϵ³) in the convex and nonconvex settings, respectively, where ϵ represents the accuracy of the solutions required in appropriately defined metrics. The established oracle complexities are, to our knowledge, the first such results in the literature for functionally constrained stochastic zeroth-order optimization problems. We demonstrate the applicability of our algorithms by illustrating their superior performance on the problems of hyperparameter tuning for sampling and neural network training. Funding: K. Balasubramanian was partially supported by a seed grant from the Center for Data Science and Artificial Intelligence Research, University of California–Davis, and the National Science Foundation [Grant DMS-2053918].
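For readers unfamiliar with the zeroth-order setting, the sketch below illustrates the basic building block such algorithms rely on: a two-point Gaussian-smoothing gradient estimator built purely from noisy function evaluations. This is a minimal illustration, not the paper's exact construction; the smoothing parameter, sample count, step size, and quadratic test objective are all assumed here.

```python
import numpy as np

def zo_gradient(f, x, mu=0.1, num_samples=20, rng=None):
    """Two-point Gaussian-smoothing gradient estimate from noisy evaluations:
    g ~ E_u[(f(x + mu*u) - f(x)) / mu * u],  u ~ N(0, I).
    mu trades smoothing bias against amplification of evaluation noise,
    so it must not be too small relative to the noise level."""
    rng = np.random.default_rng(rng)
    g = np.zeros_like(x)
    for _ in range(num_samples):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / num_samples

# Illustrative use: noisy evaluations of a simple quadratic objective.
rng = np.random.default_rng(0)
noisy_f = lambda x: 0.5 * x @ x + 1e-3 * rng.standard_normal()

x = np.ones(5)
for _ in range(200):                      # plain zeroth-order gradient descent
    x = x - 0.1 * zo_gradient(noisy_f, x, rng=rng)
print(np.linalg.norm(x))                  # close to 0
```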
Similar resources
Stochastic Zeroth-order Optimization in High Dimensions
We consider the problem of optimizing a high-dimensional convex function using stochastic zeroth-order queries. Under sparsity assumptions on the gradients or function values, we present two algorithms: a successive component/feature selection algorithm and a noisy mirror descent algorithm using Lasso gradient estimates, and show that both algorithms have convergence rates that depend only logarithmically on the ambient dimension of the problem.
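The Lasso-based gradient estimation idea can be sketched as follows: finite differences along random directions give approximate linear measurements of the gradient, and an ℓ1-penalized regression recovers it when it is sparse. This is a minimal illustration assuming scikit-learn; the query budget, regularization level, and test function are placeholders, not the paper's tuned choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_zo_gradient(f, x, delta=1e-3, num_queries=50, alpha=0.01, rng=None):
    """Estimate a sparse gradient from finite differences along random directions.

    Each query gives (f(x + delta*z) - f(x)) / delta ~ z @ grad_f(x),
    so the gradient is recovered by an l1-regularized regression."""
    rng = np.random.default_rng(rng)
    Z = rng.standard_normal((num_queries, x.shape[0]))
    y = np.array([(f(x + delta * z) - f(x)) / delta for z in Z])
    model = Lasso(alpha=alpha, fit_intercept=False)
    model.fit(Z, y)
    return model.coef_

# Example: f depends only on the first 3 of 100 coordinates.
f = lambda x: x[0]**2 + 2 * x[1]**2 + 3 * x[2]**2
g = lasso_zo_gradient(f, np.ones(100), rng=0)
print(np.nonzero(np.abs(g) > 0.1)[0])  # roughly [0, 1, 2]
```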
On Zeroth-Order Stochastic Convex Optimization via Random Walks
We propose a method for zeroth-order stochastic convex optimization that attains the suboptimality rate of Õ(n⁷T^(−1/2)) after T queries for a convex bounded function f : ℝⁿ → ℝ. The method is based on a random walk (the Ball Walk) on the epigraph of the function. The randomized approach circumvents the problem of gradient estimation and appears to be less sensitive to noisy function evaluations.
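A minimal sketch of the Ball Walk step on an epigraph is given below: the walk needs only a membership oracle, i.e., one function evaluation per proposal, which is what lets the approach sidestep gradient estimation. The truncation level, radius, and test function are illustrative assumptions, and the annealing machinery that turns sampling into optimization is omitted.

```python
import numpy as np

def ball_walk(membership, z0, radius=0.05, num_steps=10000, rng=None):
    """Basic Ball Walk: approximate uniform sampling from a convex body
    given only a membership oracle (here: a truncated epigraph of f).
    Proposals falling outside the body are rejected and the walk stays put."""
    rng = np.random.default_rng(rng)
    z = np.asarray(z0, dtype=float)
    n = z.shape[0]
    samples = []
    for _ in range(num_steps):
        u = rng.standard_normal(n)
        u *= radius * rng.random() ** (1.0 / n) / np.linalg.norm(u)  # uniform in ball
        if membership(z + u):
            z = z + u
        samples.append(z.copy())
    return np.array(samples)

# Epigraph of f(x) = ||x||^2, truncated at level t <= 2 so the body is bounded;
# the last coordinate of z plays the role of t.
f = lambda x: x @ x
in_epigraph = lambda z: f(z[:-1]) <= z[-1] <= 2.0
samples = ball_walk(in_epigraph, np.array([0.5, 0.5, 1.5]), rng=0)
print(samples.mean(axis=0))
```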
Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming
In this paper, we introduce a new stochastic approximation (SA) type algorithm, namely the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming (SP) problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this method possesses a nearly optimal rate of convergence if the problem is convex.
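The defining device of the RSG method is returning an iterate chosen uniformly at random rather than the last one, which is what yields guarantees on the expected gradient norm at the output in the nonconvex case. Below is a minimal sketch under assumed step-size, iteration-count, and noise settings; the test objective is illustrative.

```python
import numpy as np

def rsg(stoch_grad, x0, step=0.01, num_iters=1000, rng=None):
    """Randomized stochastic gradient (RSG) sketch: run SGD, then output an
    iterate drawn uniformly at random -- the randomization behind the bound
    on the expected squared gradient norm at the returned point."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    iterates = []
    for _ in range(num_iters):
        x = x - step * stoch_grad(x)
        iterates.append(x.copy())
    return iterates[rng.integers(num_iters)]  # R ~ Uniform{1, ..., N}

# Illustrative smooth objective f(x) = ||x||^2 - sum(cos(x)) with noisy gradients.
g_rng = np.random.default_rng(1)
stoch_grad = lambda x: 2 * x + np.sin(x) + 0.1 * g_rng.standard_normal(x.shape)

print(rsg(stoch_grad, np.ones(4), rng=0))  # near the stationary point 0
```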
Oracle Complexity of Second-Order Methods for Smooth Convex Optimization
Second-order methods, which utilize gradients as well as Hessians to optimize a given function, are of major importance in mathematical optimization. In this work, we study the oracle complexity of such methods, or equivalently, the number of iterations required to optimize a function to a given accuracy. Focusing on smooth and convex functions, we derive (to the best of our knowledge) the first lower bounds on their oracle complexity.
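As a concrete reference point for the oracle model, here is a minimal damped Newton iteration, the prototypical second-order method such bounds apply to: each iteration spends one gradient and one Hessian oracle call. The unit step and the smooth convex test function are illustrative assumptions.

```python
import numpy as np

def damped_newton(grad, hess, x0, step=1.0, num_iters=20):
    """Prototypical second-order method: x <- x - step * H(x)^{-1} g(x),
    using one gradient and one Hessian oracle call per iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        x = x - step * np.linalg.solve(hess(x), grad(x))
    return x

# Smooth convex test function f(x) = sum(exp(x_i) - x_i), minimized at x = 0.
grad = lambda x: np.exp(x) - 1.0
hess = lambda x: np.diag(np.exp(x))
print(damped_newton(grad, hess, np.full(3, 2.0)))  # converges to ~0
```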
A Comprehensive Linear Speedup Analysis for Asynchronous Stochastic Parallel Optimization from Zeroth-Order to First-Order
Asynchronous parallel optimization has received substantial success and extensive attention recently. One of the core theoretical questions is how much speedup (or benefit) asynchronous parallelization can bring. This paper provides a comprehensive and generic analysis of the speedup property for a broad range of asynchronous parallel stochastic algorithms, from the zeroth order to the first order.
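A minimal Hogwild!-style sketch of the kind of lock-free asynchronous update covered by such analyses is shown below. The thread count, step size, and noisy quadratic objective are illustrative assumptions, and Python threads are used only to mimic asynchrony, not to demonstrate actual speedup.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

w = np.zeros(10)  # shared parameters, updated by all workers without locks

def worker(num_updates, seed):
    rng = np.random.default_rng(seed)
    for _ in range(num_updates):
        g = 2 * w + 0.1 * rng.standard_normal(w.shape)  # noisy gradient of ||w||^2
        w[:] -= 0.01 * g  # asynchronous in-place update (no lock)

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(worker, 500, seed) for seed in range(4)]
    for fut in futures:
        fut.result()  # propagate any worker exception

print(np.linalg.norm(w))  # near 0: the threads jointly minimized ||w||^2
```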
Journal
Title: INFORMS Journal on Optimization
Year: 2022
ISSN: 2575-1484, 2575-1492
DOI: https://doi.org/10.1287/ijoo.2022.0085